
    The macroeconomics of public sector deficits: a synthesis

    Fiscal deficits have been at the forefront of macroeconomic adjustment in the 1980s, in both developing and developed countries. Fiscal deficits were blamed in good part for the assortment of ills that beset developing countries in the 1980s: over-indebtedness leading to the debt crisis beginning in 1982, high inflation, and poor investment and growth performance. This paper examines the evidence for the macroeconomic effects of fiscal deficits, using the results of a set of ten case studies done for the Bank research project, "The Macroeconomics of Public Sector Deficits." The ten cases were Argentina, Chile, Colombia, Cote d'Ivoire, Ghana, Morocco, Mexico, Pakistan, Thailand and Zimbabwe. The authors first present a summary of the stylized facts on fiscal adjustment. They then present the results of the decomposition of the deficit in the case studies. The results of the case studies are then used to relate deficits to macroeconomic imbalances: first by analyzing the relationship of deficits to private consumption and investment, and second by examining the relationship of deficits to external imbalances. While macroeconomic imbalances are clearly interrelated, the authors assume the effects of such interrelationships are small and use a sequence of partial equilibrium analyses of each type of imbalance.
    Keywords: Public Sector Economics & Finance, Environmental Economics & Policies, Banks & Banking Reform, Economic Stabilization, Economic Theory & Research

    Confidence Statements for Efficiency Estimates from Stochastic Frontier Models

    This paper is an empirical study of the uncertainty associated with estimates from stochastic frontier models. We show how to construct confidence intervals for estimates of technical efficiency levels under different sets of assumptions ranging from the very strong to the relatively weak. We demonstrate empirically how the degree of uncertainty associated with these estimates relates to the strength of the assumptions made and to various features of the data.
    Keywords: Confidence intervals, stochastic frontier models, efficiency measurement
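    Under the strongest of the assumptions the abstract alludes to (normal noise, half-normal inefficiency, known or estimated variance parameters), the inefficiency term conditional on a firm's composed residual follows a truncated normal distribution, and an interval for technical efficiency exp(-u) follows from its quantiles. A minimal sketch of that textbook recipe, with illustrative parameter values that are not taken from the paper:

    ```python
    import numpy as np
    from scipy import stats

    def efficiency_interval(eps, sigma_u, sigma_v, alpha=0.05):
        """Interval for technical efficiency exp(-u), given a firm's
        composed residual eps = v - u, assuming half-normal
        u ~ N+(0, sigma_u^2) and normal noise v ~ N(0, sigma_v^2)."""
        s2 = sigma_u**2 + sigma_v**2
        mu_star = -eps * sigma_u**2 / s2            # conditional mean before truncation
        sig_star = sigma_u * sigma_v / np.sqrt(s2)  # conditional std deviation
        # u | eps is N(mu_star, sig_star^2) truncated below at zero
        cond = stats.truncnorm(a=-mu_star / sig_star, b=np.inf,
                               loc=mu_star, scale=sig_star)
        u_lo, u_hi = cond.ppf(alpha / 2), cond.ppf(1 - alpha / 2)
        # efficiency is exp(-u), so the quantiles swap places
        return np.exp(-u_hi), np.exp(-u_lo)

    # eps, sigma_u, sigma_v below are made-up numbers for illustration
    lo, hi = efficiency_interval(eps=-0.3, sigma_u=0.4, sigma_v=0.2)
    print(f"95% interval for technical efficiency: ({lo:.3f}, {hi:.3f})")
    ```

    Weakening the distributional assumptions, as the paper does, widens these intervals; the sketch only shows the strong-assumption end of that spectrum.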

    Ejido reform and the NAFTA

    North American Free Trade Agreement; Mexico; Agriculture - Mexico

    Spatial summation of individual cones in human color vision.

    The human retina contains three classes of cone photoreceptors, each sensitive to a different portion of the visual spectrum: long (L), medium (M) and short (S) wavelengths. Color information is computed by downstream neurons that compare relative activity across the three cone types. How cone signals are combined at a cellular scale has been more difficult to resolve. This is especially true near the fovea, where spectrally opponent neurons in the parvocellular pathway draw excitatory input from a single cone, and thus even the smallest stimulus projected through natural optics will engage multiple color-signaling neurons. We used an adaptive optics microstimulator to target individual and pairs of cones with light. Consistent with prior work, we found that color percepts elicited from individual cones were predicted by their spectral sensitivity, although there was considerable variability even between cones within the same spectral class. The appearance of spots targeted at two cones was predicted by an average of their individual activations. However, two cones of the same subclass elicited percepts that were systematically more saturated than predicted by an average. Together, these observations suggest both spectral opponency and prior experience influence the appearance of small spots.

    Directional solidification of superalloys

    This invention relates to the directional solidification of superalloys, in particular nickel-based superalloys, by imposition of a predetermined temperature profile at the solidification front and, depending on the desired results, a predetermined rate of advance of said solidification front, whereby castings of markedly superior fatigue resistance are produced.

    Sampling Errors and Confidence Intervals for Order Statistics: Implementing the Family Support Act

    The Family Support Act allows states to reimburse child care costs up to the 75th percentile of local market price for child care. States must carry out surveys to estimate these 75th percentiles. This estimation problem raises two major statistical issues: (1) picking a sample design that will allow one to estimate the percentiles cheaply, efficiently and equitably; and (2) assessing the sampling variability of the estimates obtained. For Massachusetts, we developed a sampling design that equalized the standard errors of the estimated percentiles across 65 distinct local markets. This design was chosen because state administrators felt public day care providers and child advocates would find it equitable, thus limiting costly appeals. Estimation of standard errors for the sample 75th percentiles requires estimation of the density of the population at the 75th percentile. We implement and compare a number of parametric and nonparametric methods of density estimation. A kernel estimator provides the most reasonable estimates. On the basis of the mean integrated squared error criterion we selected the Epanechnikov kernel and the Sheather-Jones automatic bandwidth selection procedure. Because some of our sample sizes were too small to rely on asymptotics, we also constructed nonparametric confidence intervals using the hypergeometric distribution. For most of our samples, these confidence intervals were similar to those based on the asymptotic standard errors. Substantively we find wide variation in the price of child care, depending on the child's age, type of care and geographic location. For full-time care, the 75th percentiles ranged from $242 per week for infants in child care centers in Boston to $85 per week for family day care in western Massachusetts.
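    The two interval constructions described here can be sketched in a few lines: the asymptotic standard error of a sample quantile needs a density estimate at that quantile, while the distribution-free interval reads bounds straight off the order statistics. The sketch below uses simulated prices (the survey data are not reproduced in the abstract) and scipy's Gaussian KDE with its default bandwidth as a stand-in for the paper's Epanechnikov kernel with Sheather-Jones bandwidth; it also uses the binomial count of observations below the quantile, a simple-random-sampling analogue of the paper's hypergeometric construction:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # hypothetical weekly child-care prices in dollars, for illustration only
    prices = rng.gamma(shape=9.0, scale=15.0, size=200)

    p, n = 0.75, len(prices)
    q75 = np.quantile(prices, p)

    # asymptotic standard error of a sample quantile:
    #   se = sqrt(p(1-p)/n) / f(q_p)
    # gaussian_kde (Scott's rule) stands in for the paper's kernel choice
    f_hat = stats.gaussian_kde(prices)(q75)[0]
    se = np.sqrt(p * (1 - p) / n) / f_hat

    # distribution-free interval: the number of observations below q_p is
    # Binomial(n, p), so pick order-statistic ranks covering ~95% of it
    j = int(stats.binom.ppf(0.025, n, p))       # lower rank
    k = int(stats.binom.ppf(0.975, n, p)) + 1   # upper rank
    x = np.sort(prices)
    lo, hi = x[j - 1], x[k - 1]

    print(f"75th percentile: {q75:.1f}, asymptotic se: {se:.2f}")
    print(f"order-statistic 95% interval: ({lo:.1f}, {hi:.1f})")
    ```

    With 200 observations the two approaches agree closely, echoing the paper's finding; the order-statistic interval matters most for the small local markets.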

    Training for design of experiments using a catapult

    Design of experiments (DOE) is a powerful approach for discovering the set of process (or design) variables that matter most to a process, and then determining at what levels these variables must be kept to optimize the response (or quality characteristic) of interest. This paper presents two catapult experiments which can easily be taught to engineers and managers in organizations as DOE training. The results have been taken from a live catapult experiment performed by a group of engineers in a company during a DOE training program. The first experiment was conducted to separate the key factors (or variables) from the trivial ones, and the second was carried out using the key factors to understand the nature of the interactions among them. The results were analysed using simple but powerful graphical tools, allowing rapid and easy interpretation by engineers with limited statistical competency.
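    The screening and interaction analysis described here rest on the standard two-level factorial contrasts: a main effect is the mean response at a factor's high level minus the mean at its low level, and an interaction applies the same contrast to the product of two coded columns. A minimal sketch with a 2^3 full factorial; the factor names and distances are hypothetical, since the paper's actual data are not in the abstract:

    ```python
    import numpy as np
    from itertools import product

    # hypothetical catapult factors and responses, for illustration only
    factors = ["stop position", "ball type", "pull-back angle"]
    design = np.array(list(product([-1, 1], repeat=3)))   # 8 runs, coded -1/+1
    distance = np.array([28, 94, 35, 108, 30, 96, 44, 122])  # distance (cm)

    # main effect = mean response at high level - mean response at low level
    for name, col in zip(factors, design.T):
        effect = distance[col == 1].mean() - distance[col == -1].mean()
        print(f"{name:>16}: {effect:+.2f}")

    # two-factor interaction: same contrast applied to the product column
    ab = design[:, 0] * design[:, 1]
    print(f"interaction A*B: {ab @ distance / 4:+.2f}")
    ```

    Plotting these effects on a Pareto or half-normal chart, and the cell means on an interaction plot, gives the kind of graphical analysis the paper advocates for audiences with limited statistical background.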